This page includes synopses of some of the most recent published software testing articles and commentary from RBCS experts regarding the software testing industry and other topics. To view the full article, click on the provided link.
Organizing Manual Testing on a Budget by Capers Jones, Vice President and Chief Technology Officer, Namcook Analytics LLC
RBCS is pleased to feature a special guest author for our newsletter article, Capers Jones. Capers Jones is, of course, a long-standing force for improving the software engineering industry, and has published a number of books that I consider essential reading for software professionals who seek to truly understand, through data and facts, what happens on software projects. Recently, he published an important book on software quality, The Economics of Software Quality. So, I asked Capers if he'd be willing to contribute a guest article, and he graciously agreed. This article, on software quality today and tomorrow, gives us a sobering view of our current situation, but also provides clear direction on what we need to do to get better. The good news is that we already have many of the tools we need to improve software quality. -- Rex Black
In 2012, large software projects are hazardous business undertakings. More than half of software projects larger than 10,000 function points (about 1,000,000 lines of code) are either cancelled or run late by more than a year.
When examining troubled software projects, we consistently find that the main reason for delay or termination is an excessive volume of serious defects. Conversely, large software projects that succeed are always characterized by excellence in both defect prevention and defect removal. It follows that achieving state-of-the-art software quality control is the single most important objective of software process improvement.
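As a rough illustration of the kind of measurement this implies (a sketch of ours, not from the article), defect removal efficiency compares defects removed before release with those that escape to the field; the figures below are hypothetical.
```python
# Hypothetical illustration of defect removal efficiency (DRE),
# a measure long associated with Capers Jones' work.
def defect_removal_efficiency(found_before_release, found_after_release):
    """DRE = defects removed pre-release / total defects found."""
    total = found_before_release + found_after_release
    return found_before_release / total if total else 1.0

# Example: 900 defects removed before release, 100 escaped to the field.
print(f"DRE: {defect_removal_efficiency(900, 100):.0%}")  # DRE: 90%
```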
Quality control is on the critical path for advancing software engineering from its current status as a skilled craft to become a true profession.
How to Pick Testing Tools by Rex Black
Many of us got into technology because we were fascinated by the prospect of using computers to build better ways to get work done. (That and the almost magical way we could command a complex machine to do something simply through the force of words coming off our fingers, into a keyboard, and onto a screen.) Ultimately, those of us who consider ourselves software engineers, like all engineers, are in the business of building useful things.
Of course, engineers need tools. Civil engineers have dump trucks, trenching machines, and graders. Mechanical engineers have CAD/CAM software. And we have integrated development environments (IDEs), configuration management tools, automated unit testing and functional regression testing tools, and more. Many great testing tools are available, and some of them are even free. But just because you can get a tool doesn't mean that you need the tool.
When you get beyond the geek-factor of some tool, you come to the practical questions: What is the business case for using a tool? There are so many options, but how do I pick one? How should I introduce and deploy the tool? How can I measure the return on investment for the tool? This article will help you uncover answers to these questions as you contemplate tools.
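As one hedged illustration of framing the business case (our sketch, not from the article), a simple comparison of tool costs against labor savings might look like this; all figures and cost categories are hypothetical.
```python
# Hypothetical sketch of a simple tool ROI calculation.
# Figures and cost categories are illustrative only.
def tool_roi(license_cost, implementation_cost, annual_maintenance,
             annual_savings, years):
    """ROI = (total benefit - total cost) / total cost."""
    total_cost = license_cost + implementation_cost + annual_maintenance * years
    total_benefit = annual_savings * years
    return (total_benefit - total_cost) / total_cost

# Example: $50k to acquire and deploy, $5k/year upkeep, $40k/year saved.
print(f"3-year ROI: {tool_roi(30_000, 20_000, 5_000, 40_000, 3):.0%}")
```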
Measuring Confidence along the Dimensions of Test Coverage by Rex Black
When I talk to senior project and product stakeholders outside of test teams, confidence in the system (especially confidence that it will have a sufficient level of quality) is one benefit they want from a test team involved in system and system integration testing. Another key benefit such stakeholders commonly mention is providing timely, credible information about quality, including our level of confidence in system quality.
Reporting their level of confidence in system quality often proves difficult for many testers. Some testers resort to reporting confidence in terms of their gut feel. Next to major functional areas, they draw smiley faces and frowny faces on a whiteboard and say things like, "I've got a bad feeling about function XYZ." When management decides to release the product anyway, the hapless testers either suffer the Curse of Cassandra if function XYZ fails in production, or watch their credibility evaporate if there are no problems with function XYZ in production.
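One way to replace gut feel with measurement (a sketch of ours, not taken from the article) is to report, per functional area, how much of the planned coverage has been achieved and how much of it passed; the areas and numbers below are invented.
```python
# Hypothetical sketch: expressing confidence per functional area as
# measured coverage rather than gut feel. Data is illustrative only.
areas = {
    # area: (tests planned, tests run, tests passed)
    "login":    (40, 40, 40),
    "checkout": (60, 45, 38),
    "reports":  (30, 10, 9),
}

for area, (planned, run, passed) in areas.items():
    coverage = run / planned           # how much of the plan we executed
    pass_rate = passed / run if run else 0.0
    print(f"{area:10s} coverage={coverage:.0%} pass rate={pass_rate:.0%}")
```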
How to Build Quality Applications by Rex Black
Testing is an excellent means to build confidence in the quality of software before it's deployed in a data center or released to customers. It's good to have confidence before you turn an application loose on the users, but why wait until the end of the project? The most efficient form of quality assurance is building software the right way, right from the start. What can software testing, software quality, and software engineering professionals do, starting with the first day of the project, to deliver quality applications?
Metrics for Software Testing: Managing with Facts Part 4: Product Metrics By Rex Black
In the previous article in this series, we moved from a discussion of process metrics to a discussion of how metrics can help you manage projects. I talked about using project metrics to understand the progress of testing on a project, and how to use those metrics to respond and guide the project to the best possible outcome. We looked at how to use project metrics properly and how to avoid misusing them.
In this final article in the series, we'll look at one more type of metric: product metrics. Product metrics are often forgotten, but good product metrics help you understand the quality status of the system under test. This article will help you understand how to use product metrics properly. I'll also offer some concluding thoughts on the proper use of metrics in testing as I wind up this series of articles.
As I wrote above, product metrics help us understand the current quality status of the system under test. Good testing allows us to measure the quality and the quality risk in a system, but we need proper product metrics to capture those measures. These product metrics provide the insights to guide where product improvements should occur, if quality is not where it should be (e.g., given the current point in the schedule). As mentioned in the first article in this series, we can talk about metrics as relevant to effectiveness, efficiency, and elegance.
Effectiveness product metrics measure the extent to which the product is achieving desired levels of quality. Efficiency product metrics measure the extent to which a product achieves that desired level of quality in an economical fashion. Elegance product metrics measure the extent to which a product effectively and efficiently achieves those results in a graceful, well-executed fashion.
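To make these categories concrete, here is a brief sketch (ours, not from the article) of one effectiveness metric and one efficiency metric; the metric choices and figures are illustrative.
```python
# Hypothetical sketch: one effectiveness and one efficiency product
# metric. Names and figures are illustrative, not from the article.
def defect_density(defects_found, size_kloc):
    """Effectiveness: defects per thousand lines of code."""
    return defects_found / size_kloc

def cost_per_defect(total_quality_cost, defects_found):
    """Efficiency: money spent per defect found and fixed."""
    return total_quality_cost / defects_found

print(defect_density(120, 80))        # 1.5 defects/KLOC
print(cost_per_defect(60_000, 120))   # $500 per defect
```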
Advanced Risk Based Test Results Reporting: Putting Residual Quality Risk Measurement in Motion by Rex Black and Nagata Atsushi
Analytical risk based testing offers a number of benefits to test teams and organizations that use this strategy. One of those benefits is the opportunity to make risk-aware release decisions. However, this benefit requires risk based test results reporting, which many organizations have found particularly challenging. This article describes the basics of risk based testing results reporting, then shows how Rex Black (of RBCS) and Nagata Atsushi (of Sony) developed and implemented new and ground-breaking ways to report test results based on risk.
Testing can be thought of as (one) way to reduce the risks to system quality prior to release. Quality risks typically include possible situations like slow system response to user input, incorrect calculations, corruption of customer data, and difficulty in understanding system interfaces. All testing strategies, competently executed, will reduce quality risks. However, analytical risk based testing, a strategy that allocates testing effort and sequences test execution based on risk, minimizes the level of residual quality risk for any given amount of testing effort.
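As a minimal sketch of how such a strategy can work (our illustration; the article describes the real technique), one common approach scores each quality risk item by likelihood times impact, runs the highest-scoring tests first, and reports the unexecuted remainder as residual risk; the risk items and scales below are invented.
```python
# Hypothetical sketch of risk-based test sequencing: score each quality
# risk item by likelihood x impact, run highest-risk tests first, and
# report the untested remainder as residual risk. Data is illustrative.
risk_items = [
    # (risk item, likelihood 1-5, impact 1-5)
    ("corrupted customer data", 3, 5),
    ("incorrect calculations",  4, 4),
    ("slow response to input",  4, 3),
    ("confusing interface",     2, 2),
]

scored = sorted(risk_items, key=lambda r: r[1] * r[2], reverse=True)
budgeted = scored[:2]   # suppose we only have time to cover two items
residual = sum(l * i for _, l, i in scored[2:])

for name, l, i in budgeted:
    print(f"test first: {name} (score {l * i})")
print(f"residual quality risk score: {residual}")
```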
There are various techniques for risk based testing, including highly formal techniques like Failure Mode and Effect Analysis (FMEA). Most organizations find this technique too difficult to implement, so RBCS typically recommends (and helps clients to implement) a technique called Pragmatic Risk Analysis and Management (PRAM). You can find a case study of PRAM implementation at another large company, CA, at http://www.rbcs-us.com/images/documents/A-Case-Study-in-Risk-Based-Testing.pdf. While this article describes the implementation of the technique for projects following a sequential lifecycle, a similar approach has been implemented by organizations using Agile and iterative lifecycle models.
This article was originally published in the December 2010 edition of Software Test and Quality Assurance (www.softwaretestpro.com).
Metrics for Software Testing: Managing with Facts: Part 3: Project Metrics Written by Rex Black
In the previous article in this series, we moved from general observations about metrics to a specific discussion about how metrics can help you manage processes. We talked about the use of metrics to understand and improve test and development process capability with facts. We covered the proper development of process metrics, starting with objectives for the metrics and ultimately setting industry-based goals for those metrics. We looked at how to recognize a good set of process metrics, and trade-offs for those metrics.
In this and the next article in the series, we'll look at two more specific types of metrics. In this article, we turn from process to project metrics. Project metrics can help us understand our status in terms of the progress of testing and quality on a project. Understanding current project status is a pre-requisite to rational, fact-driven project management decisions. In this article, you'll learn how to develop, understand, and respond to good project metrics.
Click here to read the article in its entirety.
Metrics for Software Testing: Managing with Facts: Part 2: Process Metrics Written by Rex Black
In the previous article in this series, I offered a number of general observations about metrics, illustrated with examples. We talked about the use of metrics to manage testing and quality with facts. We covered the proper development of metrics, top-down (objective-based) not bottom-up (tools-based). We looked at how to recognize a good set of metrics.
In the next three articles in the series, we'll look at specific types of metrics. In this article, we will take up process metrics. Process metrics can help us understand the quality capability of the software engineering process as well as the testing capability of the software testing process. Understanding these capabilities is a pre-requisite to rational, fact-driven process improvement decisions. In this article, you'll learn how to develop and understand good process metrics.
Click here to read the article in its entirety.
Metrics for Software Testing: Managing with Facts, Part 1: The How and Why of Metrics by Rex Black
At RBCS, a growing part of our consulting business is helping clients with metrics programs. We're always happy to help with such engagements, and I usually try to do the work personally, because I find it so rewarding. What's so great about metrics? Well, when you use metrics to track, control, and manage your testing and quality efforts, you can be confident that you are managing with facts and reality, not opinions and guesswork.
When clients want to get started with metrics, they often have questions. How can we use metrics to manage testing? What metrics can we use to measure the test process? What metrics can we use to measure our progress in testing a project? What do metrics tell us about the quality of the product? We work with clients to answer these questions all the time. In this article, and the next three articles in this series, I'll show you some of the answers.
Critical Testing Processes: An Open Source, Business Driven Framework for Improving the Testing Process by Rex Black
When I wrote my book Critical Testing Processes in the early 2000s, I started with the premise that some test processes are critical, some are not. I designed this lightweight framework for test process improvement in order to focus the test team and test manager on a few test areas that they simply must do properly. This contrasts with the more expansive and complex models inherent in TPI and TMM. In addition, the Critical Testing Processes (CTP) framework eschews the prescriptive elements of TMM and TPI since it does not impose an arbitrary, staged maturity model.
What's the problem with prescriptive models? In my consulting work, I have found that businesses want to make improvements based on the business value of the improvement and the organizational pain that improvement will alleviate. A simplistic maturity rating might lead a business to make improvements in parts of the overall software process or test process that are actually less problematic or less important than other parts of the process simply because the model listed them in order.
CTP is a non-prescriptive process model. It describes the important software processes and what should happen in them, but it doesn't put them in any order of improvement. This makes CTP a very flexible model. It allows you to identify and deal with specific challenges to your test processes. It identifies various attributes of good processes, both quantitative and qualitative. It allows you to use business value and organizational pain to select the order and importance of improvements. It is also adaptable to all software development lifecycle models.
Challenges in Agile: Polish Translation
by Rex Black
A popular article translated into Polish! A number of our clients have adopted Scrum and other Agile methodologies. Every software development lifecycle model, from sequential to spiral to Agile, has testing implications. Some of these implications ease the testing process. We don't need to worry about these implications here.
Some of these testing implications challenge testing. In this case study, I discuss those challenges so that our client can understand the issues created by the Scrum methodology, and distinguish those from other types of testing issues that our client faces.
Click here for the English version.
A Few Thoughts on Test Data
by Rex Black
This article is excerpted from Chapter 3 of Rex Black's popular book Managing the Testing Process, 3e.
A number of RBCS clients find that obtaining good test data poses many challenges. For any large-scale system, testers usually cannot create sufficient and sufficiently diverse test data by hand; i.e., one record at a time. While data-generation tools exist and can create almost unlimited amounts of data, the data so generated often do not exhibit the same diversity and distribution of values as production data. For these reasons, many of our clients consider production data ideal for testing, particularly for systems where large sets of records have accumulated over years of use, across various revisions of the current system and its predecessors.
However, to use production data, we must preserve privacy. Production data often contains personal information about individuals that must be handled securely, yet requiring secure data handling during testing activities imposes undesirable inefficiencies and constraints. Therefore, many organizations want to anonymize (scramble) the production data prior to using it for testing.
This anonymization process leads to the next set of challenges, though. The anonymization process must occur securely, in the sense that it is not reversible should the data fall into the wrong hands. For example, simply substituting the next digit or the next letter in sequence would be obvious to anyone; it doesn't take long to deduce that "Kpio Cspxo" is actually "John Brown", which makes the de-anonymization process trivial.
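To make the contrast concrete, here is a brief sketch (ours, not from the chapter) comparing that trivially reversible letter shift with a keyed one-way hash, one common approach to irreversible pseudonymization; the key handling shown is deliberately simplified.
```python
import hashlib
import hmac

# The naive scheme from the example above: shifting each letter by one
# is trivially reversible ("Kpio Cspxo" -> "John Brown").
def naive_shift(name):
    return "".join(chr(ord(c) + 1) if c.isalpha() else c for c in name)

# A sketch of a safer approach: a keyed one-way hash (pseudonymization).
# Without the secret key, the mapping cannot be reversed or recomputed.
# Key management and referential integrity are left out of this sketch.
SECRET_KEY = b"stored-outside-the-test-environment"  # hypothetical

def pseudonymize(name):
    digest = hmac.new(SECRET_KEY, name.encode(), hashlib.sha256)
    return digest.hexdigest()[:12]  # stable, opaque surrogate value

print(naive_shift("John Brown"))    # Kpio Cspxo -- easily reversed
print(pseudonymize("John Brown"))   # opaque hex -- not reversible
```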
Using Domain Analysis for Testing
by Rex Black
Many of you are probably familiar with basic test techniques like equivalence partitioning and boundary value analysis. In this article, Rex presents an advanced technique for black-box testing called domain analysis. Domain analysis is an analytical way to deal with the interaction of factors or variables within the business logic layer of a program. It is appropriate when you have a number of interacting factors to deal with. These factors might be input fields, output fields, database fields, events, or conditions. They should interact to create two or more situations in which the system will process data differently. Those situations are the domains. In each domain, the value of one or more factors influences the values of other factors, the system's outputs, or the processing performed.
In some cases, the number of possible test cases becomes very large due to the number of variables or factors and the potentially interesting test values or options for each variable or factor. For example, suppose you have 10 integer input fields that each accept a number from 0 to 99. There are 100 billion billion (10^20) valid input combinations.
Equivalence class partitioning and boundary value analysis on each field will reduce but not resolve the problem. You have four boundary values for each field: two valid (0 and 99) and two invalid (-1 and 100). The illegal values are easy, because you have only 20 tests for those. However, to test each combination of the legal boundary values, you have 1,024 test cases. But do you need to do so? And would testing combinations of boundary values necessarily make for good tests? Are there smarter options for dealing with such combinatorial explosions?
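For readers who want to check the arithmetic, this short sketch (ours, not from the article) reproduces the counts above.
```python
# Worked arithmetic behind the counts above.
fields = 10
values_per_field = 100           # integers 0..99

all_combinations = values_per_field ** fields
print(all_combinations)          # 10**20, i.e. 100 billion billion

# Boundary value analysis: two valid boundaries (0, 99) and two
# invalid ones (-1, 100) per field.
invalid_tests = fields * 2       # test each illegal value once: 20
valid_combinations = 2 ** fields # every combination of 0/99: 1,024
print(invalid_tests, valid_combinations)
```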
This article was originally published in Quality Matters.
Advanced Software Test Design Techniques: Use Cases
by Rex Black
The following is an excerpt from my recently-published book, Advanced Software Testing: Volume 1. This is a book for test analysts and test engineers. It is especially useful for ISTQB Advanced Test Analyst certificate candidates, but contains detailed discussions of test design techniques that any tester can, and should, use. In this third article in a series of excerpts, I discuss the application of use cases to testing workflows.
At the start of this series, I said we would cover three techniques that would prove useful for testing business logic, often more useful than equivalence partitioning and boundary value analysis. First, we covered decision tables, which are best in transactional testing situations. Next, we looked at state-based testing, which is ideal when we have sequences of events that occur and conditions that apply to those events, and the proper handling of a particular event/condition situation depends on the events and conditions that have occurred in the past. In this article, we'll cover use cases, where preconditions and postconditions help to insulate one workflow from the previous workflow and the next workflow. With these three techniques in hand, you have a set of powerful techniques for testing the business logic of a system.
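As a minimal illustration of that insulating role (our sketch, not an excerpt from the book), the test below establishes the use case's preconditions explicitly in setup and verifies its postconditions at the end, so neighboring workflows cannot interfere; the Cart class is an invented stand-in for a real system.
```python
import unittest

# Hypothetical sketch: a use-case-based test where explicit pre- and
# postconditions insulate this workflow from the ones before and after.
class Cart:
    def __init__(self):
        self.items = []
    def add(self, item):
        self.items.append(item)
    def checkout(self):
        paid = list(self.items)
        self.items = []
        return paid

class CheckoutUseCaseTest(unittest.TestCase):
    def setUp(self):
        # Precondition: a cart exists and holds exactly one item.
        self.cart = Cart()
        self.cart.add("book")

    def test_checkout_empties_cart(self):
        paid = self.cart.checkout()
        # Postconditions: the order was placed and the cart is empty,
        # leaving the system ready for the next workflow.
        self.assertEqual(paid, ["book"])
        self.assertEqual(self.cart.items, [])

if __name__ == "__main__":
    unittest.main()
```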
This article was originally published in Testing Experience Magazine. Subscribe today!
Advanced Software Test Design Techniques: State Diagrams, State Tables and Switch Coverage
by Rex Black
In this article, we look at state-based testing. State-based testing is ideal when we have sequences of events that occur and conditions that apply to those events, and the proper handling of a particular event/condition situation depends on the events and conditions that have occurred in the past. In some cases, the sequences of events can be potentially infinite, which of course exceeds our testing capabilities, but we want to have a test design technique that allows us to handle arbitrarily-long sequences of events. Read this article to learn more about state-based testing.
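As a brief sketch of the idea (ours, not an excerpt from the article), the state model below is expressed as a transition table, and exercising each row once gives 0-switch coverage; the account model is invented.
```python
# Hypothetical sketch: a state transition table for a simple account,
# and the test steps needed for 0-switch coverage (every single
# transition exercised at least once). The model is invented.
transitions = {
    # (state, event): next state
    ("closed", "open"):      "active",
    ("active", "suspend"):   "suspended",
    ("active", "close"):     "closed",
    ("suspended", "resume"): "active",
    ("suspended", "close"):  "closed",
}

def next_state(state, event):
    if (state, event) not in transitions:
        raise ValueError(f"event '{event}' invalid in state '{state}'")
    return transitions[(state, event)]

# 0-switch coverage: each (state, event) pair is one required test step.
for (state, event), target in transitions.items():
    assert next_state(state, event) == target
    print(f"{state} --{event}--> {target}")
```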
This article was originally published in Testing Experience Magazine. Subscribe today!
Advanced Software Test Design Techniques: Decision Tables and Cause-Effect Graphs
by Rex Black
This article is an excerpt from Rex Black's recently-published book, Advanced Software Testing: Volume 1. This is a book for test analysts and test engineers. It is especially useful for ISTQB Advanced Test Analyst certificate candidates, but contains detailed discussions of test design techniques that any tester can, and should, use. In this first article in a series of excerpts, Black starts by discussing the related concepts of decision tables and cause-effect graphs.
Equivalence partitioning and boundary value analysis are very useful techniques. They are especially useful when testing input field validation at the user interface. However, lots of testing that we do as test analysts involves testing the business logic that sits underneath the user interface. We can use boundary values and equivalence partitioning on business logic, too, but three additional techniques, decision tables, use cases, and state-based testing, will often prove handier and more effective. Read this article to learn more about these powerful techniques.
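As a small illustration of the decision table idea (our sketch, not an excerpt from the book), each column of conditions plus its expected action becomes one test; the loan approval rule below is invented.
```python
# Hypothetical sketch: a small decision table for a loan approval rule,
# with one test per column. The business rule is invented.
def approve_loan(good_credit, income_verified):
    return good_credit and income_verified

# Each row below is one column of the decision table:
# (condition values..., expected action)
decision_table = [
    (True,  True,  True),   # approve
    (True,  False, False),  # reject: income not verified
    (False, True,  False),  # reject: poor credit
    (False, False, False),  # reject: both conditions fail
]

for good_credit, income_verified, expected in decision_table:
    assert approve_loan(good_credit, income_verified) == expected
print("all decision table columns covered")
```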
This software testing article was originally published in the June 2009 edition of Testing Experience Magazine.
Risk Based Testing: What It Is and How You Can Benefit
by Rex Black
Rex Black's pioneering Managing the Testing Process was both the first test management book and the first to discuss risk-based testing. In this software testing article, Rex explains:
- The benefits of risk-based testing.
- Why adding risk analysis to the test team's responsibilities actually reduces their workload.
- The importance of stakeholder participation.
- Common mistakes that can occur in risk-based testing.
Rex illustrates these points, not through hypothetical discussion, but by examining a case study where RBCS helped a client launch risk-based testing. Read this article to learn how to analyze risks to quality, and use that analysis to be a smarter test professional.
Quality Goes Bananas
by Rex Black, Daniel Derr and Michael Tyszkiewicz
You're familiar with test automation, but what is a dumb monkey? How can it help you automate your testing and explore very large screen flows and input sets? Is it true that you can build dumb monkeys from freeware with no tool budget? What kind of quality risks can dumb monkeys address? Read this article to learn the answers to these and other test automation questions.
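As a hedged sketch of the concept (ours, not from the article), a dumb monkey simply generates random inputs, treats expected rejections as noise, and flags anything else as a potential defect; the parse_quantity function below, including its planted bug, is invented.
```python
import random

# Hypothetical sketch of a "dumb monkey": fire random inputs at an
# interface and watch for unexpected crashes. parse_quantity is an
# invented stand-in for whatever screen or API the monkey pounds on.
LOOKUP = [x * 2 for x in range(100)]  # valid quantities are 0..99

def parse_quantity(text):
    value = int(text)     # ValueError on junk input: expected rejection
    return LOOKUP[value]  # hidden defect: crashes for values >= 100

random.seed(42)  # a reproducible monkey makes failures easy to replay
alphabet = "0123456789-+ abc"
for _ in range(1000):
    blob = "".join(random.choice(alphabet)
                   for _ in range(random.randint(0, 8)))
    try:
        parse_quantity(blob)
    except ValueError:
        pass  # expected rejection of malformed input
    except Exception as exc:  # anything else is a potential defect
        print(f"possible bug on input {blob!r}: {exc!r}")
        break
```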
How Outsourcing Affects Testing
by Rex Black
This software testing article is excerpted from Chapter 10 of Rex Black's upcoming book Managing the Testing Process, 3e. Over the last twenty years, outsourced development of one or more key components of a system has come to dominate software and hardware systems engineering. The trend started in hardware in the 1990s. RBCS clients like Dell, Hitachi, Hewlett Packard, and other computer systems vendors took advantage of cheap yet educated labor overseas to compete effectively in an increasingly commoditized market. By the end of 2002, three years into a spectacular IT downturn that saw computer science enrollments in the United States fall to less than half of their 1999 levels, price had become the primary determinant in most IT project decisions. Mass outsourcing of software projects took hold, and it continues unabated to this day... Read this article to get a picture of the effects of outsourcing on software testing.
Intelligent Use of Testing Service Providers by Rex Black
In this software testing article, Rex Black analyzes the use of outsourcing in testing, based on some twenty years of experience with outsourcing of testing in one form or another. First, Mr. Black enumerates the key differences between in-house and outsourced test teams. Next, driven by these key differences, he analyzes which tasks fit better with outsourced testing service providers, followed by a similar analysis for in-house test teams. Then, Mr. Black lists some of the technical, managerial, and political challenges that confront a company trying to make effective use of outsourced testing. Finally, he addresses some of the processes needed to use testing service providers effectively and with the least amount of trouble. Read this article to significantly improve your use of outsourced testing.
A Case Study in Successful Risk-Based Testing at CA by Rex Black, Peter Nash and Ken Young
This article presents a case study of a risk-based testing pilot project at CA, the world's leading independent IT management software company. The development team chosen for this pilot is responsible for a widely-used mainframe software product called CA SYSVIEW® Performance Management, an intuitive tool for proactive management and real-time monitoring of z/OS environments. By analyzing a vast array of performance metrics, CA SYSVIEW can help organizations identify and resolve problems quickly. CA piloted risk-based testing as part of our larger effort to ensure the quality of the solutions we deliver. The pilot consisted of six main activities:
- Training key stakeholders on risk-based testing
- Holding a quality risk analysis session
- Analyzing and refining the quality risk analysis
- Aligning the testing with the quality risks
- Guiding the testing based on risks
- Assessing benefits and lessons
This article addresses each of these areas, as well as some of the broader issues associated with risk-based testing. Click here to read the version of this software testing article as published in Better Software Testing.
Four Ideas for Improving Software Test Efficiency
"Do more with less. Work smarter not harder. Same coverage, fewer testers." If you're like a lot of testers and test managers, you'll be hearing statements like those a lot in 2009, since we appear headed for another tight economic period. If you need a way to demonstrate quick, measurable efficiency gains in your test operation, read this short article to learn four great ideas that will help you improve your software test efficiency.
A Simplified Automation Solution Using WATIJ by Steven Troy, Jamie Mitchell and Rex Black
One of our clients, CA, has continued to impress us with innovative ways to go about their testing. An upcoming software testing article will discuss how we are helping them institute risk-based testing. Read this article to learn how one of their teams is using a leading-edge open source testing tool, WATIJ, to help contain regression risk.
A Story about User Stories and Test-Driven Development by Gertrud Bjørnvig, James O. Coplien, and Neil Harrison
Test-Driven Development, or TDD, is a term used for a popular collection of development techniques in wide use in the Agile community. While testing is part of its name, and though it includes tests, and though it fits in that part of the life cycle usually ascribed to unit testing activity, TDD pundits universally insist that it is not a testing technique, but rather a technique that helps one focus one's design thinking. The idea is that you write your tests first, and your code second.
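As a minimal test-first illustration (our sketch, not from the article), the test below is written before the code that makes it pass; the shopping cart example is invented.
```python
import unittest

# Step one of a TDD cycle: write the test before the code it exercises.
class ShoppingCartTest(unittest.TestCase):
    def test_total_sums_item_prices(self):
        cart = ShoppingCart()
        cart.add(3.50)
        cart.add(1.25)
        self.assertEqual(cart.total(), 4.75)

# Step two: write just enough code to make the failing test pass.
class ShoppingCart:
    def __init__(self):
        self.prices = []
    def add(self, price):
        self.prices.append(price)
    def total(self):
        return sum(self.prices)

if __name__ == "__main__":
    unittest.main()
```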
Read this article to explore some subtle pitfalls of TDD that have come out of our experience, our consultancy on real projects (all of the conjectured problems are things we have actually seen in practice), and a bit of that rare commodity called common sense.
The original two-part version of this article about Test Driven Development was published in Better Software Testing. Click here to read the first part and click here for the second part.
The ISTQB Advanced Syllabus: Guiding the Way to Better Software Testing by Rex Black
The International Software Testing Qualifications Board (ISTQB) has already effected profound change in the software testing field, with almost 100,000 people having attained Foundation certification. But a Foundation certification is just that: only a Foundation. With the release of the new Advanced syllabus in October 2007, the ISTQB has expanded and improved the next rung on the ladder of test professionalism. In the slides from this tutorial, Rex Black, President of the ISTQB, shows how the ISTQB Advanced syllabus can guide you, your testing colleagues, and your organization toward better testing, reduced risk, and higher quality.
The IT Professional on the Outsourced Project by Rex Black
More and more IT professionals work on projects where some or all of the development is done by third parties, often overseas. While cost savings make such arrangements attractive to executives, individual contributors and managers on such projects face some significant challenges. What does outsourcing mean for IT professionals? In this talk, Rex Black offers insights from his extensive involvement in outsourced projects, both successful and not-so-successful. Rex will illustrate his points with case studies, and share humorous and scary anecdotes along the way.
The Right Stuff: Four Small Steps for Testers, One Giant Leap for Risk Mitigation By Rex Black and Barton Layne
Recently, we worked on a high-risk, high-visibility system where performance testing ("Let's just make sure it handles the load") was the last item on the agenda. As luck would have it, the system didn't handle the load, and very long days and nights ensued. Why does it have to be this way? Read this article about risk mitigation to ensure this doesn't happen to you.
Empirix's QAZone with Rex Black By Marina Gil Santamaria
From certification to automation, expert thoughts on where the testing industry is and where it's headed.
Quality Risk Analysis: Which Quality Risks Should We Worry About? By Rex Black
Since it is not possible to test everything, it is necessary to pick a subset of the overall set of tests to be run. Read this article to discover how quality risk analysis can help you focus the test effort.
Component Outsourcing, Quality Risks, and Testing: Factors and Strategies for Project Managers By Rex Black
More and more projects involve integration of custom-developed or commercial off-the-shelf (COTS) components, rather than in-house development or enhancement of software. In effect, these two approaches constitute direct or indirect outsourcing, respectively, of some or all of the development work for a system. While some project managers see such outsourcing of development as reducing the overall risk, each integrated component can bring with it significantly increased risks to system quality. Read this software testing article to learn about the factors that lead to these risks, and strategies you can use to manage them.